Interpretability provides a means for humans to verify aspects of machine learning (ML) models and empower human+ML teaming in situations where the task cannot be fully automated. Different contexts require explanations with different properties. For example, the kind of explanation required to determine if an early cardiac arrest warning system is ready to be integrated into a care setting is very different from the type of explanation required for a loan applicant to help determine the actions they might need to take to make their application successful. Unfortunately, there is a lack of standardization when it comes to properties of explanations: different papers may use the same term to mean different quantities, and different terms to mean the same quantity. This lack of a standardized terminology and categorization of the properties of ML explanations prevents us from both rigorously comparing interpretable machine learning methods and identifying what properties are needed in what contexts. In this work, we survey properties defined in interpretable machine learning papers, synthesize them based on what they actually measure, and describe the trade-offs between different formulations of these properties. In doing so, we enable more informed selection of task-appropriate formulations of explanation properties as well as standardization for future work in interpretable machine learning.
High-quality estimates of uncertainty and robustness are crucial for numerous real-world applications, especially for deep learning, which underlies many deployed ML systems. The ability to compare techniques that improve these estimates is therefore very important for research and practice alike. Yet competitive comparisons of methods are often lacking for a range of reasons, including: the availability of compute for extensive tuning, the incorporation of sufficiently many baselines, and concrete documentation for reproducibility. In this paper we introduce Uncertainty Baselines: high-quality implementations of standard and state-of-the-art deep learning methods on a variety of tasks. As of this writing, the collection spans 19 methods across 9 tasks, each with at least 5 metrics. Each baseline is a self-contained experiment pipeline with easily reusable and extendable components. Our goal is to provide immediate starting points for experimentation with new methods or applications. Additionally, we provide model checkpoints, experiment outputs as Python notebooks, and leaderboards for comparing results. Code is available at https://github.com/google/uncertainty-baselines.
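Calibration is one of the metrics such a baseline suite typically reports. As an illustrative sketch only (this is not code from the repository linked above), the following computes the expected calibration error (ECE) of a classifier from its confidence scores and correctness indicators:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE: the weighted average gap between confidence and accuracy
    over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # bin weight = fraction of samples
    return ece

# Sanity check: predictions whose accuracy matches their confidence
# are well calibrated, so ECE should be close to 0.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=10_000)
correct = rng.uniform(size=10_000) < conf
print(expected_calibration_error(conf, correct))
```

A benchmark suite like the one described would report several such metrics (e.g., accuracy, negative log-likelihood, calibration) per method and task; ECE is shown here only because it is compact enough to sketch.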
Observing that for certain NLP tasks, such as semantic role prediction or thematic fit estimation, random embeddings perform as well as pretrained embeddings, we explore what settings allow this and examine where most of the learning is encoded: in the word embeddings, the semantic role embeddings, or ``the network''. We find nuanced answers, depending on the task and its relation to the training objective. We study these representation-learning questions in a multi-task learning setting, in which role prediction and role filling are supervised tasks, while several thematic fit tasks lie outside the model's direct supervision. We observe a non-monotonic relation between the quality scores of some tasks and the training data size. To better understand this observation, we analyze these results using easier, per-verb versions of the tasks.
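To make the random-versus-pretrained comparison concrete, here is a minimal, hypothetical PyTorch sketch; the dimensions, pooling, and classifier are illustrative stand-ins, not the authors' architecture. The same classifier is trained over a frozen embedding table that is either randomly initialized or copied from pretrained vectors, so any performance gap is attributable to the embeddings rather than to ``the network'':

```python
import torch
import torch.nn as nn

VOCAB, DIM, N_ROLES = 10_000, 100, 20     # illustrative sizes

def make_model(pretrained=None):
    """Classifier over a frozen embedding table: random or pretrained."""
    emb = nn.Embedding(VOCAB, DIM)
    if pretrained is not None:
        emb.weight.data.copy_(pretrained)
    emb.weight.requires_grad = False      # freeze: only "the network" learns
    net = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU(),
                        nn.Linear(128, N_ROLES))
    return emb, net

def train_step(emb, net, opt, words, roles):
    logits = net(emb(words).mean(dim=1))  # mean-pool the argument's tokens
    loss = nn.functional.cross_entropy(logits, roles)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Condition A: random embeddings.  Condition B: "pretrained" vectors,
# stubbed here with a random tensor standing in for, e.g., word2vec.
for name, init in [("random", None), ("pretrained", torch.randn(VOCAB, DIM))]:
    emb, net = make_model(init)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    words = torch.randint(VOCAB, (32, 5))  # toy batch of 5-token spans
    roles = torch.randint(N_ROLES, (32,))
    print(name, train_step(emb, net, opt, words, roles))
```

Running both conditions to convergence on a real task and comparing held-out scores is the experiment the abstract's observation rests on.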
We consider producing explanations for a neural network in settings where its training data cannot be accessed, for example due to privacy or safety concerns. Recently, $\mathcal{I}$-Nets have been proposed as a sample-free approach to post-hoc, global model interpretability that does not require access to training data. They frame interpretation as a machine learning task that maps a network representation (its parameters) to a representation of an interpretable function. In this paper, we extend the $\mathcal{I}$-Net framework to standard and soft decision trees as surrogate models. We propose suitable decision tree representations and a corresponding design of the $\mathcal{I}$-Net output layers. Furthermore, we make $\mathcal{I}$-Nets applicable to real-world tasks by considering more realistic distributions when generating the training data for the $\mathcal{I}$-Net. We empirically evaluate our approach against traditional global, post-hoc interpretability methods and show that it achieves superior results when training data is not accessible.
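The core mapping translates directly into code. Below is a minimal, hypothetical sketch (not the authors' implementation) of what an $\mathcal{I}$-Net learns: the flattened parameters of a trained target network go in, and the parameters of a depth-$d$ soft decision tree (one linear gate per inner node, one class distribution per leaf) come out.

```python
import torch
import torch.nn as nn

N_PARAMS = 1_000                          # flattened target-network size
N_FEATURES, N_CLASSES, DEPTH = 4, 3, 2    # surrogate soft tree (illustrative)
N_INNER, N_LEAVES = 2 ** DEPTH - 1, 2 ** DEPTH

class INet(nn.Module):
    """Maps a parameter vector to the parameters of a soft decision tree."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(N_PARAMS, 256), nn.ReLU())
        # One output head per component of the tree representation.
        self.gate_w = nn.Linear(256, N_INNER * N_FEATURES)   # node weights
        self.gate_b = nn.Linear(256, N_INNER)                # node biases
        self.leaves = nn.Linear(256, N_LEAVES * N_CLASSES)   # leaf logits

    def forward(self, theta):
        h = self.body(theta)
        return (self.gate_w(h).view(-1, N_INNER, N_FEATURES),
                self.gate_b(h),
                self.leaves(h).view(-1, N_LEAVES, N_CLASSES))

def soft_tree_predict(x, w, b, leaf_logits):
    """Route x through the soft tree: each inner node splits probability
    mass left/right; the prediction mixes the leaf distributions."""
    g = torch.sigmoid(torch.einsum("bnf,bf->bn", w, x) + b)  # P(go right)
    reach = [torch.ones(x.shape[0])] + [None] * (2 * N_INNER)
    for i in range(N_INNER):                 # breadth-first node order
        reach[2 * i + 1] = reach[i] * (1 - g[:, i])
        reach[2 * i + 2] = reach[i] * g[:, i]
    leaf_probs = torch.stack(reach[N_INNER:], dim=1)         # [B, N_LEAVES]
    return torch.einsum("bl,blc->bc", leaf_probs, leaf_logits.softmax(-1))

# theta stands in for the flattened weights of a trained target network.
inet = INet()
w, b, leaves = inet(torch.randn(1, N_PARAMS))
x = torch.randn(8, N_FEATURES)            # query points for the surrogate
probs = soft_tree_predict(x, w.expand(8, -1, -1), b.expand(8, -1),
                          leaves.expand(8, -1, -1))
print(probs.shape)                        # torch.Size([8, 3])
```

Training the $\mathcal{I}$-Net would then consist of generating many networks, querying each for input/output pairs, and fitting the predicted surrogate tree's outputs to the corresponding target network's outputs; that loop is omitted here.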
Machine learning about language can be improved by supplying it with specific knowledge and sources of external information. We present here a new version of the linked open data resource ConceptNet that is particularly well suited to be used with modern NLP techniques such as word embeddings. ConceptNet is a knowledge graph that connects words and phrases of natural language with labeled edges. Its knowledge is collected from many sources that include expert-created resources, crowd-sourcing, and games with a purpose. It is designed to represent the general knowledge involved in understanding language, improving natural language applications by allowing the application to better understand the meanings behind the words people use. When ConceptNet is combined with word embeddings acquired from distributional semantics (such as word2vec), it provides applications with understanding that they would not acquire from distributional semantics alone, nor from narrower resources such as WordNet or DBPedia. We demonstrate this with state-of-the-art results on intrinsic evaluations of word relatedness that translate into improvements on applications of word vectors, including solving SAT-style analogies. Example edges:
• A net is used for catching fish.
• "Leaves" is a form of the word "leaf".
• The word cold in English is studený in Czech.
• O alimento é usado para comer [Food is used for eating].
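Combining a knowledge graph with distributional embeddings is commonly done by retrofitting: each word's vector is pulled toward the vectors of its graph neighbors while staying anchored to its original distributional value. The following is a minimal sketch of that general idea only, not the actual ConceptNet Numberbatch pipeline:

```python
import numpy as np

def retrofit(vectors, edges, alpha=1.0, beta=1.0, iters=10):
    """Pull each word's vector toward its graph neighbors while keeping
    it close to the original distributional vector.

    vectors: dict word -> np.ndarray (e.g., from word2vec)
    edges:   iterable of (word, word) pairs from the knowledge graph
    """
    neighbors = {w: [] for w in vectors}
    for a, b in edges:
        if a in vectors and b in vectors:
            neighbors[a].append(b)
            neighbors[b].append(a)
    new = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iters):
        for w, nbrs in neighbors.items():
            if nbrs:  # closed-form update of the local least-squares objective
                new[w] = (alpha * vectors[w]
                          + beta * sum(new[n] for n in nbrs)) \
                         / (alpha + beta * len(nbrs))
    return new

# Toy example: "cup" and "mug" are connected in the graph, so their
# retrofitted vectors move toward each other.
vecs = {"cup": np.array([1.0, 0.0]), "mug": np.array([0.0, 1.0])}
out = retrofit(vecs, [("cup", "mug")])
print(out["cup"], out["mug"])
```

In this framing, the knowledge graph's labeled edges supply relatedness information that distributional statistics alone would miss, which is the source of the improvements the abstract reports.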